
Keyword Search Result

[Keyword] object tracking (40 hits)

Results 21-40 of 40

  • FPGA Implementation of Exclusive Block Matching for Robust Moving Object Extraction and Tracking

    Yoichi TOMIOKA  Ryota TAKASU  Takashi AOKI  Eiichi HOSOYA  Hitoshi KITAZAWA  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E97-D No:3
      Page(s):
    573-582

    Hardware acceleration is an essential technique for extracting and tracking moving objects in real time. It is desirable to design tracking algorithms so that they are amenable to parallel computation in hardware. Exclusive block matching methods are designed for hardware implementation, and they can realize detailed motion extraction as well as robust moving object tracking. In this study, we develop tracking hardware based on an exclusive block matching method on an FPGA. This tracking hardware is based on a two-dimensional systolic array architecture, and it can realize robust moving object extraction and tracking at more than 100 fps for QVGA images by exploiting the high parallelism of the exclusive block matching method, synchronous shift data transfer, and special circuits that accelerate the search for the exclusive correspondence of blocks.
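The exclusive correspondence idea can be sketched in software (a minimal illustration, not the authors' systolic-array design): every block in the previous frame competes for a partner in the current frame, and pairs are fixed greedily in ascending order of SAD so that no two blocks ever share a match. The toy block layout and the greedy assignment order are assumptions.

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b) for x, y in zip(row_a, row_b))

def exclusive_block_match(prev_blocks, curr_blocks):
    """Return a dict {prev_index: curr_index} of one-to-one assignments."""
    candidates = []
    for i, pb in enumerate(prev_blocks):
        for j, cb in enumerate(curr_blocks):
            candidates.append((sad(pb, cb), i, j))
    candidates.sort()                     # cheapest correspondences first
    used_prev, used_curr, match = set(), set(), {}
    for cost, i, j in candidates:
        if i not in used_prev and j not in used_curr:
            match[i] = j                  # exclusivity: each block used once
            used_prev.add(i)
            used_curr.add(j)
    return match

# Two 2x2 blocks that swapped positions between frames.
prev = [[[10, 10], [10, 10]], [[200, 200], [200, 200]]]
curr = [[[205, 205], [205, 205]], [[12, 12], [12, 12]]]
print(exclusive_block_match(prev, curr))  # {0: 1, 1: 0}
```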

  • Real-Time Tracking with Online Constrained Compressive Learning

    Bo GUO  Juan LIU  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E96-D No:4
      Page(s):
    988-992

    In object tracking, a recent trend is the “tracking by detection” technique, which trains a discriminative online classifier to separate objects from the background. However, incorrect updating of the online classifier and the insufficient features used during online learning often lead to drift problems. In this work we propose an online random fern classifier with a simple but effective compressive feature, in a framework integrating the online classifier, an optical-flow tracker and an update model. The compressive feature is a random projection from a high-dimensional multi-scale image feature space to a low-dimensional representation by a sparse measurement matrix, which is expected to contain more information. An update model is proposed to detect tracker failure, correct the tracker result and constrain the updating of the online classifier, thus reducing the chance of wrong updates during online training. Our method runs in real time, and experimental results show performance improvements over other state-of-the-art approaches on several challenging video clips.
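The compressive feature can be sketched as a sparse random projection (an illustrative stand-in: the fern classifier is omitted, and the sparsity parameter `s`, the seed, and the dimensions are assumed values, not the paper's):

```python
import random

def sparse_measurement_matrix(n_low, n_high, s=3, seed=0):
    """Sparse random matrix: entries are +1 or -1 with probability 1/(2s)
    each and 0 otherwise, a standard sparse-projection construction."""
    rng = random.Random(seed)
    R = []
    for _ in range(n_low):
        row = []
        for _ in range(n_high):
            u = rng.random()
            if u < 1.0 / (2 * s):
                row.append(1.0)
            elif u < 1.0 / s:
                row.append(-1.0)
            else:
                row.append(0.0)
        R.append(row)
    return R

def compress(R, x):
    """Project a high-dimensional feature x to the low-dimensional space."""
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]

R = sparse_measurement_matrix(n_low=4, n_high=100)
x = [1.0] * 100                 # toy high-dimensional multi-scale feature
print(len(compress(R, x)))      # 4
```

Because most entries of `R` are zero, the projection costs only a few additions per output dimension, which is what makes the feature cheap enough for online learning.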

  • Kernel-Based On-Line Object Tracking Combining both Local Description and Global Representation

    Quan MIAO  Guijin WANG  Xinggang LIN  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E96-D No:1
      Page(s):
    159-162

    This paper proposes a novel method for object tracking by combining local feature and global template-based methods. The proposed algorithm consists of two stages from coarse to fine. The first stage applies on-line classifiers to match the corresponding keypoints between the input frame and the reference frame. Thus a rough motion parameter can be estimated using RANSAC. The second stage employs kernel-based global representation in successive frames to refine the motion parameter. In addition, we use the kernel weight obtained during the second stage to guide the on-line learning process of the keypoints' description. Experimental results demonstrate the effectiveness of the proposed technique.
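The first-stage motion estimate from matched keypoints can be illustrated with a toy RANSAC loop (a sketch only: the paper estimates a richer motion model than the pure translation used here, and `n_iters` and `inlier_tol` are assumed values):

```python
import random

def ransac_translation(matches, n_iters=200, inlier_tol=2.0, seed=0):
    """Estimate a 2-D translation from noisy keypoint matches.
    matches: list of ((x1, y1), (x2, y2)) pairs, possibly with outliers."""
    rng = random.Random(seed)
    best_t, best_inliers = None, -1
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 match
        tx, ty = x2 - x1, y2 - y1
        inliers = sum(
            1 for (a, b), (c, d) in matches
            if abs((c - a) - tx) <= inlier_tol and abs((d - b) - ty) <= inlier_tol
        )
        if inliers > best_inliers:                 # keep the best-supported model
            best_inliers, best_t = inliers, (tx, ty)
    return best_t, best_inliers

# 8 matches follow a (5, -3) shift; 2 are gross outliers.
matches = [((x, y), (x + 5, y - 3))
           for x, y in [(0, 0), (1, 2), (3, 1), (4, 4), (6, 2), (7, 7), (8, 1), (9, 5)]]
matches += [((0, 0), (40, 40)), ((2, 2), (-30, 10))]
t, n = ransac_translation(matches)
print(t, n)  # (5, -3) 8
```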

  • Enhancing Memory-Based Particle Filter with Detection-Based Memory Acquisition for Robustness under Severe Occlusion

    Dan MIKAMI  Kazuhiro OTSUKA  Shiro KUMANO  Junji YAMATO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E95-D No:11
      Page(s):
    2693-2703

    A novel enhancement of the memory-based particle filter is proposed for visual pose tracking under severe occlusions. The enhancement is the addition of a detection-based memory acquisition mechanism. The memory-based particle filter, called M-PF, is a particle filter that predicts prior distributions from the past history of the target state stored in memory. It can achieve high robustness against abrupt changes in movement direction and quick recovery from target loss due to occlusions. Such high performance requires sufficient past history stored in the memory. Conventionally, M-PF conducts online memory acquisition under the assumption of simple target dynamics without occlusions, so as to guarantee high-quality histories of the target track. This requirement narrows the practical coverage of M-PF. In this paper, we propose a new memory acquisition mechanism for M-PF that supports application in practical conditions including complex dynamics and severe occlusions. The key idea is to use a target detector that produces an additional prior distribution of the target state. We call the result M-PFDMA, for M-PF with detection-based memory acquisition. The detection-based prior distribution predicts possible target positions/poses well even in the limited-visibility conditions caused by occlusions. Such better prior distributions contribute to stable estimation of the target state, which is then added to the memorized data. As a result, M-PFDMA can start with no memory entries yet soon achieve stable tracking even in severe conditions. Experiments confirm M-PFDMA's good performance in such conditions.
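The key idea of mixing a detection-based prior into the particle set can be sketched in one dimension (illustrative only: the mixing ratio, the Gaussian noise model around the detection, and the 1-D state are assumptions, not the M-PFDMA formulation):

```python
import random

def mixed_prior(memory_particles, detection, n_particles=100,
                det_ratio=0.3, det_sigma=1.0, seed=0):
    """Draw a prior particle set: a fraction comes from a Gaussian around
    the detector's output, the rest is resampled from the history-based
    (memory) prior."""
    rng = random.Random(seed)
    n_det = int(n_particles * det_ratio)
    particles = [rng.gauss(detection, det_sigma) for _ in range(n_det)]
    particles += [rng.choice(memory_particles) for _ in range(n_particles - n_det)]
    return particles

memory = [10.0, 10.5, 9.8, 10.2]      # history-based predictions (1-D pose)
prior = mixed_prior(memory, detection=50.0)
print(len(prior))  # 100
```

Even with an empty or misleading memory, the detector-driven fraction keeps some particles near the true state, which is what lets the filter bootstrap itself.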

  • Efficient Topological Calibration and Object Tracking with Distributed Pan-Tilt Cameras

    Norimichi UKITA  Kunihito TERASHITA  Masatsugu KIDODE  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E95-D No:2
      Page(s):
    626-635

    We propose a method for calibrating the topology of distributed pan-tilt cameras (i.e., the structure of routes among and within FOVs) and its probabilistic model. To observe as many objects as possible for as long as possible, pan-tilt control is an important issue in automatic calibration as well as in tracking. During a calibration period, each camera should be directed towards objects that pass through unreliable routes whose topology has not yet been calibrated. This camera control allows us to establish the topology model efficiently. After the topology model is established, each camera should be directed towards the route with the highest probability of observing an object. We propose a camera control framework based on a mixture of the reliability of the estimated routes and the probability of object observation. This framework is applicable both to camera calibration and to object tracking by adjusting weight variables. Experiments demonstrate the efficiency of our camera control scheme in establishing the camera topology model and in tracking objects for as long as possible.
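The weighted mixture of calibration need and observation probability can be sketched as a route-scoring rule (the linear scoring form, field names, and route values here are illustrative assumptions, not the paper's formulation):

```python
def choose_route(routes, w):
    """Score each route by a weighted mix of (1 - reliability), i.e. the
    need for more calibration, and the probability of observing an object.
    w=1 favors calibration, w=0 favors tracking."""
    def score(r):
        return w * (1.0 - r["reliability"]) + (1.0 - w) * r["obs_prob"]
    return max(routes, key=score)["name"]

routes = [
    {"name": "hallway", "reliability": 0.9, "obs_prob": 0.7},
    {"name": "stairs",  "reliability": 0.2, "obs_prob": 0.1},
]
print(choose_route(routes, w=1.0))  # stairs  (calibration phase)
print(choose_route(routes, w=0.0))  # hallway (tracking phase)
```

Sliding `w` from 1 toward 0 reproduces the paper's idea of a single control framework that transitions from calibration to tracking.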

  • Robust Tracking Using Particle Filter with a Hybrid Feature

    Xinyue ZHAO  Yutaka SATOH  Hidenori TAKAUJI  Shun'ichi KANEKO  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E95-D No:2
      Page(s):
    646-657

    This paper presents a novel method for robust object tracking in video sequences using a hybrid feature-based observation model in a particle filtering framework. An ideal observation model should have both a high ability to accurately distinguish objects from the background and high reliability in identifying the detected objects. Traditional features are better at solving the former problem but weak at the latter. To overcome this, we adopt a robust and dynamic feature called Grayscale Arranging Pairs (GAP), which has high discriminative ability even under severe illumination variation and dynamic background elements. Together with the GAP feature, we also adopt a color histogram feature in order to take advantage of traditional features in resolving the first problem. An efficient and simple integration method is used to combine the GAP feature with the color information. Comparative experiments demonstrate that object tracking with our integrated features performs well even when objects move across complex backgrounds.
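The color-histogram half of such a hybrid likelihood is commonly scored with the Bhattacharyya coefficient; the sketch below combines it with a generic GAP-style score via a weighted geometric mean (the combination rule and weight `alpha` are assumptions, not the paper's integration method):

```python
def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms (1 = identical)."""
    return sum((pi * qi) ** 0.5 for pi, qi in zip(p, q))

def combined_likelihood(gap_score, color_p, color_q, alpha=0.5):
    """Blend a GAP-based score with a color-histogram score; the weighted
    geometric form here is an illustrative assumption."""
    color_score = bhattacharyya(color_p, color_q)
    return (gap_score ** alpha) * (color_score ** (1 - alpha))

h1 = [0.25, 0.25, 0.25, 0.25]
h2 = [0.25, 0.25, 0.25, 0.25]
print(bhattacharyya(h1, h2))            # 1.0 for identical histograms
print(combined_likelihood(1.0, h1, h2)) # 1.0 when both cues agree perfectly
```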

  • Implementation of Scale and Rotation Invariant On-Line Object Tracking Based on CUDA

    Quan MIAO  Guijin WANG  Xinggang LIN  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E94-D No:12
      Page(s):
    2549-2552

    Object tracking is a major technique in image processing and computer vision, and tracking speed directly determines the quality of applications. This paper presents a parallel implementation of a recently proposed scale- and rotation-invariant on-line object tracking system. The implementation targets NVIDIA Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA), following the single-instruction multiple-thread model. Specifically, we analyze the original algorithm and propose a GPU-based parallel design, with emphasis on exploiting data parallelism and memory usage. In addition, we apply optimization techniques to maximize the utilization of the GPU and to reduce data transfer time. Experimental results show that our GPGPU-based method running on a GTX480 graphics card achieves up to a 12X speed-up, including I/O time, compared with an equivalent implementation on an Intel E8400 3.0 GHz CPU.

  • SHOT: Scenario-Type Hypothesis Object Tracking with Indoor Sensor Networks

    Masakazu MURATA  Yoshiaki TANIGUCHI  Go HASEGAWA  Hirotaka NAKANO  

     
    PAPER-Information Network

      Vol:
    E94-D No:5
      Page(s):
    1035-1044

    In the present paper, we propose an object tracking method called scenario-type hypothesis object tracking. In the proposed method, an indoor monitored region is divided into multiple closed micro-cells using sensor nodes that can detect objects and their moving directions. Sensor information is accumulated at a tracking server through wireless multihop networks, and object tracking is performed at the tracking server. In order to estimate the trajectories of objects from the sensor information, we introduce a novel concept, the virtual world, which consists of virtual micro-cells and virtual objects. Virtual objects are generated, transferred, and deleted in virtual micro-cells according to sensor information. In order to handle specific movements of objects in micro-cells, such as the slowdown of objects passing through a narrow passageway, we also consider the generation of virtual objects according to interactions among virtual objects. In addition, virtual objects are generated when the tracking server estimates that sensor information has been lost, in order to decrease the number of tracking failures. Through simulations, we confirm that the ratio of successful tracking is improved by up to 29% by considering interactions among virtual objects. Furthermore, the tracking performance is improved by up to 6% by considering loss of sensor information.
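The virtual-world bookkeeping can be sketched as a small state machine (class and event names are assumptions; real scenario-type hypotheses would keep multiple alternative assignments rather than the first-match rule used here):

```python
class VirtualWorld:
    """Virtual objects living in virtual micro-cells, updated by sensor events."""

    def __init__(self, cells):
        self.objects = {}          # object id -> current micro-cell
        self.cells = set(cells)
        self.next_id = 0

    def on_enter(self, cell):
        """A boundary sensor reports an object entering the monitored region."""
        oid = self.next_id
        self.next_id += 1
        self.objects[oid] = cell
        return oid

    def on_transfer(self, src, dst):
        """A sensor between src and dst reports a crossing: move one virtual
        object from src to dst (first match only, for simplicity)."""
        for oid, cell in self.objects.items():
            if cell == src:
                self.objects[oid] = dst
                return oid
        return None                # no candidate: possible sensor-info loss

w = VirtualWorld(["A", "B", "C"])
w.on_enter("A")
w.on_transfer("A", "B")
w.on_transfer("B", "C")
print(w.objects)  # {0: 'C'}
```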

  • Non-rigid Object Tracking as Salient Region Segmentation and Association

    Xiaolin ZHAO  Xin YU  Liguo SUN  Kangqiao HU  Guijin WANG  Li ZHANG  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E94-D No:4
      Page(s):
    934-937

    Tracking a non-rigid object in a video in the presence of background clutter and partial occlusion is challenging. We propose a non-rigid object-tracking paradigm that repeatedly detects and associates salient regions. Salient-region segmentation is performed in each frame; the segmentation results provide rich spatial support for tracking and make reliable, drift-free tracking of non-rigid objects possible. The precise object region is obtained simultaneously by associating the salient regions using two independent observers. Our formulation is quite general, and other salient-region segmentation algorithms can also be used. Experimental results have shown that this paradigm effectively handles the tracking of objects under rapid movement, rotation and partial occlusion.

  • A Vision-Based Emergency Response System with a Paramedic Mobile Robot

    Il-Woong JEONG  Jin CHOI  Kyusung CHO  Yong-Ho SEO  Hyun Seung YANG  

     
    PAPER

      Vol:
    E93-D No:7
      Page(s):
    1745-1753

    Detecting emergency situations is very important for a surveillance system for people such as the elderly living alone. A vision-based emergency response system with a paramedic mobile robot is presented in this paper. The proposed system consists of a vision-based emergency detection system and a mobile robot acting as a paramedic. The vision-based emergency detection system detects emergencies by tracking people and recognizing their actions from image sequences acquired by a single surveillance camera. In order to recognize human actions, interest regions are segmented from the background using a blob extraction method and tracked continuously using a generic model. An MHI (Motion History Image) for each tracked person is then constructed from the silhouette information of the region blobs, and actions are modeled from it. An emergency situation is finally detected by feeding this information to a neural network. When an emergency is detected, the mobile robot can help to diagnose the status of the person involved. To send the mobile robot to the proper position, we implement a navigation algorithm based on the distance between the person and the robot. We validate our system by reporting the emergency detection rate and by demonstrating the emergency response using the mobile robot.
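The MHI construction mentioned above follows a standard update rule: pixels covered by the current silhouette are set to a maximum value `tau`, and all other pixels decay, so recent motion appears bright and older motion fades. A minimal sketch (the decay step `delta` and toy 2x2 frames are assumptions):

```python
def update_mhi(mhi, silhouette, tau=255, delta=32):
    """Motion History Image update: silhouette pixels are set to tau;
    elsewhere the history decays by delta, clipped at 0."""
    return [
        [tau if s else max(m - delta, 0) for m, s in zip(mrow, srow)]
        for mrow, srow in zip(mhi, silhouette)
    ]

mhi = [[0, 0], [0, 0]]
mhi = update_mhi(mhi, [[1, 0], [0, 0]])   # person occupies top-left pixel
mhi = update_mhi(mhi, [[0, 1], [0, 0]])   # then moves one pixel right
print(mhi)  # [[223, 255], [0, 0]]  -- a fading trail behind the motion
```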

  • Robust Object Tracking via Combining Observation Models

    Fan JIANG  Guijin WANG  Chang LIU  Xinggang LIN  Weiguo WU  

     
    LETTER-Image Recognition, Computer Vision

      Vol:
    E93-D No:3
      Page(s):
    662-665

    Various observation models have been introduced into the object tracking community, and combining them has become a promising direction. This paper proposes a novel approach for estimating the confidences of different observation models and then effectively combining them in the particle filter framework. In our approach, the spatial likelihood distribution is represented by three simple but efficient parameters, reflecting the overall similarity, the sharpness of the distribution and the degree of multi-peakedness. Balancing these three aspects leads to good estimates of the confidences, which helps maintain the advantages of each observation model and further increases robustness to partial occlusion. Experiments on challenging video sequences demonstrate the effectiveness of our approach.
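The three summary quantities can be sketched over a set of particle likelihoods (the exact formulas below are illustrative stand-ins for the paper's definitions): a peaked, unambiguous model should score higher sharpness than a flat, ambiguous one.

```python
def likelihood_stats(likelihoods):
    """Summarize a particle likelihood distribution by three quantities:
    overall similarity (the max), sharpness (peak vs. mean), and a
    multi-peak degree (fraction of particles near the peak)."""
    m = max(likelihoods)
    mean = sum(likelihoods) / len(likelihoods)
    sharpness = m / mean if mean > 0 else 0.0
    near_peak = sum(1 for l in likelihoods if l >= 0.8 * m)
    multi_peak = near_peak / len(likelihoods)
    return m, sharpness, multi_peak

# A confident, single-peaked model vs. an ambiguous, flat one.
peaked = [0.01] * 9 + [0.9]
flat = [0.5] * 10
print(likelihood_stats(peaked)[1] > likelihood_stats(flat)[1])  # True
```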

  • Measuring Particles in Joint Feature-Spatial Space

    Liang SHA  Guijin WANG  Anbang YAO  Xinggang LIN  

     
    LETTER-Vision

      Vol:
    E92-A No:7
      Page(s):
    1737-1742

    The particle filter has attracted increasing attention from researchers in object tracking due to its promising ability to handle nonlinear and non-Gaussian systems. In this paper, we explore the problem of precisely estimating the observation likelihoods of particles in the joint feature-spatial space. For this purpose, a similarity based on a mixture of Gaussian kernel functions is presented to evaluate the discrepancy between the target region and a particle region. This similarity can be interpreted as the expectation of the spatially weighted feature distribution over the target region. To adapt to abrupt changes in object motion, we also present a method that appropriately adjusts the state transition model by utilizing priors on motion speed and object size. In comparison with the standard particle filter tracker, our tracking algorithm shows better performance on challenging video sequences.
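A joint feature-spatial similarity of this flavor can be sketched by letting every pixel pair contribute a Gaussian kernel in feature distance times one in spatial distance (the bandwidths `h_f`, `h_s` and the single-channel pixel format are assumptions, not the paper's parameterization):

```python
import math

def joint_similarity(target_pixels, region_pixels, h_f=30.0, h_s=5.0):
    """Mixture-of-Gaussian-kernel similarity in the joint feature-spatial
    space; pixels are (feature, x, y) triples."""
    s = 0.0
    for (f1, x1, y1) in target_pixels:
        for (f2, x2, y2) in region_pixels:
            kf = math.exp(-((f1 - f2) / h_f) ** 2)
            ks = math.exp(-(((x1 - x2) ** 2 + (y1 - y2) ** 2) / h_s ** 2))
            s += kf * ks
    return s / (len(target_pixels) * len(region_pixels))

target = [(100, 0, 0), (120, 1, 0)]
same   = [(100, 0, 0), (120, 1, 0)]     # candidate matching the target
diff   = [(10, 0, 0), (240, 1, 0)]      # candidate with different appearance
print(joint_similarity(target, same) > joint_similarity(target, diff))  # True
```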

  • Wide View Imaging System Using Eight Random Access Image Sensors

    Kenji IDE  Ryusuke KAWAHARA  Satoshi SHIMIZU  Takayuki HAMAMOTO  

     
    PAPER-Image Sensor/Vision Chip

      Vol:
    E90-C No:10
      Page(s):
    1884-1891

    We have investigated real-time object tracking using a wide view imaging system. For this system, we have designed and fabricated a new smart image sensor with four functions effective for wide view imaging, such as a random access function. The system uses eight smart sensors and an octagonal mirror, and each image obtained by a sensor is equivalent to a partial image of the wide view. In addition, by using an FPGA for processing, the circuits in the system can be scaled down and a panoramic image can be obtained in real time. For object tracking with this system, an object-detection method based on background subtraction is used. When moving objects are detected in the panoramic image, they are continuously displayed on the monitor at higher resolution in real time. In this paper, we describe the random access image sensor and show some results obtained using it. In addition, we describe the wide view imaging system using the eight sensors. Furthermore, we explain the object tracking method used in this system and show results of real-time multiple-object tracking.
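The background-subtraction detection step reduces to a per-pixel threshold on the difference from a background model; a minimal sketch (the threshold value and toy frames are assumptions):

```python
def detect_foreground(frame, background, threshold=25):
    """Background subtraction: mark pixels whose absolute difference from
    the background model exceeds a threshold (1 = moving object)."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[100, 100, 100], [100, 100, 100]]
frame      = [[100, 180, 100], [100, 175, 100]]
print(detect_foreground(frame, background))  # [[0, 1, 0], [0, 1, 0]]
```

In the paper's system, the bounding box of such a foreground mask is what drives the random-access readout of the moving region at higher resolution.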

  • Object Tracking with Target and Background Samples

    Chunsheng HUA  Haiyuan WU  Qian CHEN  Toshikazu WADA  

     
    PAPER-Image Recognition, Computer Vision

      Vol:
    E90-D No:4
      Page(s):
    766-774

    In this paper, we present a general object tracking method based on a newly proposed pixel-wise clustering algorithm. Tracking an object in a cluttered environment is a challenging issue because the target object may have a concave shape or apertures (e.g. a hand or a comb). In such cases, it is difficult to separate the target from the background completely by simply modifying the shape of the search area. Our algorithm solves this problem by 1) describing the target object as a set of pixels, and 2) using a K-means based algorithm to detect all target pixels. To realize stable and reliable detection of target pixels, we first use a 5-D feature vector to describe both the color ("Y, U, V") and the position ("x, y") of each pixel uniformly. This enables simultaneous adaptation to both color and geometric features during tracking. Second, we use a variable ellipse model to describe the shape of the search area and to model the surrounding background, which guarantees stable object tracking under various geometric transformations. Robust tracking is realized by classifying the pixels within the search area into "target" and "background" groups with a K-means clustering based algorithm that uses the "positive" and "negative" samples. We also propose a method that detects tracking failure and recovers from it by making use of both the "positive" and the "negative" samples. This makes our method a more reliable tracking algorithm, because it can rediscover the target after it has been lost. Extensive experiments under various environments and conditions confirm the effectiveness and efficiency of the proposed algorithm.
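The two-class pixel clustering on 5-D (Y, U, V, x, y) vectors can be sketched with plain K-means seeded by a positive (target) and a negative (background) sample (the toy pixels, seeding rule, and iteration count are assumptions):

```python
def classify_pixels(pixels, target_center, background_center, n_iters=10):
    """Two-class K-means on 5-D (Y, U, V, x, y) pixel vectors, seeded by a
    'positive' (target) and a 'negative' (background) sample.
    Returns labels: 0 = target, 1 = background."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    centers = [list(target_center), list(background_center)]
    labels = [0] * len(pixels)
    for _ in range(n_iters):
        labels = [0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
                  for p in pixels]
        for k in (0, 1):
            members = [p for p, l in zip(pixels, labels) if l == k]
            if members:                       # recompute cluster centroid
                centers[k] = [sum(c) / len(members) for c in zip(*members)]
    return labels

# Bright target pixels near position (2, 2); dark background near (8, 8).
pixels = [(200, 10, 10, 2, 2), (210, 12, 9, 3, 2),
          (30, 5, 5, 8, 8), (25, 6, 4, 9, 8)]
print(classify_pixels(pixels, pixels[0], pixels[2]))  # [0, 0, 1, 1]
```

Because color and position live in the same vector, a pixel can be rejected either for looking wrong or for being in the wrong place, which is what lets the method handle concave shapes and apertures.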

  • A Robust Object Tracking Method under Pose Variation and Partial Occlusion

    Kazuhiro HOTTA  

     
    PAPER-Tracking

      Vol:
    E89-D No:7
      Page(s):
    2132-2141

    This paper presents a robust object tracking method under pose variation and partial occlusion. In practical environments, the appearance of objects changes dynamically due to pose variation or partial occlusion, so robustness to both is required for practical applications. However, it is difficult to be robust to such varied changes with only one tracking model; what is needed is a model that is mildly robust to variations and easy to update. For this purpose, Kernel Principal Component Analysis (KPCA) of local parts is used. KPCA of local parts was originally proposed for pose-independent object recognition. This method is trained using local parts cropped from only one or two object images, which is a desirable property for tracking because only one target image is given in practical applications. In addition, the model (subspace) can be updated easily by solving an eigenvalue problem. The performance of the proposed method is evaluated using a test face sequence captured under pose, partial occlusion, scaling and illumination variations. The effectiveness and robustness of the proposed method are demonstrated by comparison with a template-matching-based tracker. In addition, an adaptive update rule using the similarity with the current subspace is proposed, and its effectiveness is shown experimentally.
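Before the eigenvalue problem, KPCA builds a Gram matrix over the training parts and centers it in feature space; this first step can be sketched directly (the RBF kernel, its bandwidth, and the toy 2-D descriptors are assumptions):

```python
import math

def rbf_kernel_matrix(X, gamma=0.1):
    """Gram matrix of the RBF kernel over a set of local-part descriptors."""
    def k(a, b):
        return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return [[k(a, b) for b in X] for a in X]

def center_kernel(K):
    """Center the Gram matrix in feature space, Kc = K - 1K - K1 + 1K1,
    the standard step of kernel PCA before solving the eigenvalue problem."""
    n = len(K)
    row = [sum(K[i]) / n for i in range(n)]
    col = [sum(K[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - col[j] + tot for j in range(n)] for i in range(n)]

X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [1.2, 0.9]]   # parts from one image
Kc = center_kernel(rbf_kernel_matrix(X))
# Every row of a correctly centered Gram matrix sums to (numerically) zero.
print(all(abs(sum(r)) < 1e-9 for r in Kc))  # True
```

The eigenvectors of `Kc` then span the subspace; updating the model when a new part arrives means rebuilding this small matrix and re-solving, which is why the update is cheap.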

  • Semiautomatic Segmentation Using Spatio-Temporal Gradual Region Merging for MPEG-4

    Young-Ro KIM  Jae-Hwan KIM  Yoon KIM  Sung-Jea KO  

     
    PAPER-Source Coding/Image Processing

      Vol:
    E86-A No:10
      Page(s):
    2526-2534

    The video coding standard MPEG-4 enables content-based functionalities. It takes advantage of a prior decomposition of sequences into video object planes (VOPs), so that each VOP represents a semantic object. The extraction of semantic video objects is therefore a crucial initial step. In this paper, we present an efficient region-based semi-automatic segmentation system, which combines low-level automatic region segmentation with an interactive method for defining and tracking high-level semantic video objects. The proposed segmentation system extracts accurate object boundaries using gradual region merging and bi-directional temporal boundary refinement. The system comprises two steps: an initial object extraction step, where user input in the starting frame is used to extract a semantic object; and an object tracking step, where the underlying regions of the semantic object are tracked and grouped through successive frames. Experiments with different types of videos show the efficiency of the proposed system in semantic object extraction.
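The gradual-merging idea can be sketched on a 1-D strip of regions: adjacent regions are merged when their mean-color difference falls below a threshold that is raised in stages, so weak boundaries disappear early and strong (semantic) boundaries survive (the threshold schedule and 1-D adjacency are toy assumptions):

```python
def gradual_region_merge(regions, thresholds=(5, 10, 20)):
    """Merge adjacent regions whose mean-color difference is below a
    gradually increasing threshold. Regions are (mean_color, pixel_count)
    pairs over a 1-D strip; adjacency is implicit in list order."""
    for t in thresholds:
        merged = [regions[0]]
        for color, count in regions[1:]:
            pc, pn = merged[-1]
            if abs(color - pc) < t:
                total = pn + count            # area-weighted mean of the pair
                merged[-1] = ((pc * pn + color * count) / total, total)
            else:
                merged.append((color, count))
        regions = merged
    return regions

# Four small regions: two near-identical pairs separated by a strong edge.
strip = [(10.0, 4), (12.0, 4), (100.0, 4), (103.0, 4)]
print(gradual_region_merge(strip))  # the strong edge at 10 vs. 100 survives
```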

  • Localization and Dynamic Tracking Using Wireless-Networked Sensors and Multi-Agent Technology: First Steps

    Zhidong DENG  Weixiong ZHANG  

     
    INVITED PAPER

      Vol:
    E85-A No:11
      Page(s):
    2386-2395

    We describe in this paper our experience of developing a large-scale, highly distributed multi-agent system using wireless-networked sensors. We provide solutions to the problems of localization (position estimation) and dynamic, real-time mobile object tracking, which we call PET problems for short, using wireless sensor networks. We propose system architectures and a set of distributed algorithms for organizing and scheduling cooperative computation in distributed environments, as well as distributed algorithms for localization and real-time object tracking. Based on these distributed algorithms, we develop and implement a hardware system and software simulator for the PET problems. Finally, we present some experimental results on distance measurement accuracy using radio signal strengths of the wireless sensors and discuss future work.
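Distance estimation from radio signal strength, as used in the localization experiments above, typically follows the log-distance path-loss model; a minimal sketch (the reference power `p0` and path-loss exponent `n` are typical assumed values that must be calibrated per environment):

```python
def rssi_to_distance(rssi, p0=-40.0, n=2.0):
    """Log-distance path-loss model: estimated distance in meters from a
    received signal strength in dBm. p0 is the RSSI at 1 m and n the
    path-loss exponent (calibration parameters)."""
    return 10 ** ((p0 - rssi) / (10 * n))

print(rssi_to_distance(-40.0))  # 1.0  (at the reference distance)
print(rssi_to_distance(-60.0))  # 10.0 (20 dB more loss => 10x distance at n=2)
```

Distances estimated this way from three or more anchor nodes can then be combined by trilateration to yield a position estimate.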

  • Orientation Code Matching for Robust Object Search

    Farhan ULLAH  Shun'ichi KANEKO  Satoru IGARASHI  

     
    PAPER

      Vol:
    E84-D No:8
      Page(s):
    999-1006

    A new method for object search is proposed. Conventional template matching schemes tend to fail in the presence of irregularities and ill conditions such as background variations and illumination fluctuations resulting from shadowing or highlighting. The proposed scheme is robust against such irregularities in real-world scenes, since it is based on matching the gradient information around each pixel, computed in the form of orientation codes, rather than the gray levels directly. A probabilistic model for robust matching is given and verified with real image data. Experimental results on real-world scenes demonstrate the effectiveness of the proposed method for object search in the presence of different potential causes of mismatches.
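The orientation-code idea can be sketched per pixel: quantize the gradient direction into a fixed number of codes, reserve a special code for low-contrast pixels, and compare codes with a cyclic distance (the code count, low-contrast penalty, and thresholds are assumed values):

```python
import math

N_CODES = 16

def orientation_code(gx, gy, low_contrast=1e-3):
    """Quantize the gradient direction at a pixel into one of N_CODES codes;
    low-contrast pixels get the special 'no orientation' code N_CODES."""
    if gx * gx + gy * gy < low_contrast:
        return N_CODES
    angle = math.atan2(gy, gx) % (2 * math.pi)
    return int(angle / (2 * math.pi / N_CODES)) % N_CODES

def code_distance(c1, c2):
    """Cyclic distance between two orientation codes, with a fixed penalty
    when either pixel carries the 'no orientation' code."""
    if c1 == N_CODES or c2 == N_CODES:
        return N_CODES // 4
    d = abs(c1 - c2)
    return min(d, N_CODES - d)

print(orientation_code(1.0, 0.0))   # 0  (gradient pointing right)
print(orientation_code(0.0, 1.0))   # 4  (pointing up: a quarter turn)
print(code_distance(0, 15))         # 1  (cyclic wrap-around)
```

Because only the gradient direction survives quantization, a uniform brightness or contrast change leaves the codes, and hence the match score, unchanged.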

  • Robust Motion Tracking of Multiple Objects with KL-IMMPDAF

    Jungduk SON  Hanseok KO  

     
    PAPER-Image Processing, Image Pattern Recognition

      Vol:
    E84-D No:1
      Page(s):
    179-187

    This paper describes how image sequences taken by a stationary video camera may be effectively processed to detect and track moving objects against a stationary background in real time. Our approach first isolates the moving objects in the image sequences via a modified adaptive background estimation method, and then performs token tracking of multiple objects based on features extracted from the processed image sequences. In feature-based multiple object tracking, the most prominent issues are track initialization, data association, occlusions due to traffic congestion, and object maneuvering. While there is limited past work addressing these problems, most tracking systems proposed in the past focus on either "occlusion" or "data association" alone. In this paper, we propose the KL-IMMPDA (Kanade-Lucas Interacting Multiple Model Probabilistic Data Association) filtering approach for multiple-object tracking to address these key issues collectively. The proposed method employs optical flow measurements for both detection and track initialization, while the KL-IMMPDA filter is used to accept or reject measurements that belong to other objects. The data association performed by the proposed KL-IMMPDA results in an effective tracking scheme that is robust to partial occlusions and to the image clutter caused by object maneuvering. The simulation results show a significant performance improvement in tracking multiple objects under occlusion and maneuvering, when compared to conventional trackers such as the Kalman filter.
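The accept-or-reject step in PDA-style data association is a validation gate: measurements too far from a track's prediction are treated as clutter or as belonging to other objects. A 1-D sketch (the gate size and noise scale are assumed design parameters, not the paper's values):

```python
def gate_measurements(predicted, measurements, gate=3.0, sigma=1.0):
    """Validation gating: keep only measurements whose normalized distance
    from the predicted position lies inside the gate."""
    return [z for z in measurements if abs(z - predicted) / sigma <= gate]

pred = 10.0
zs = [9.5, 10.4, 25.0, -3.0]        # two plausible returns, two from elsewhere
print(gate_measurements(pred, zs))  # [9.5, 10.4]
```

The surviving measurements are then weighted probabilistically in the PDA update rather than forcing a single hard assignment.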

  • Real-Time Tracking of Multiple Moving Object Contours in a Moving Camera Image Sequence

    Shoichi ARAKI  Takashi MATSUOKA  Naokazu YOKOYA  Haruo TAKEMURA  

     
    PAPER-Image Processing, Image Pattern Recognition

      Vol:
    E83-D No:7
      Page(s):
    1583-1591

    This paper describes a new method for the detection and tracking of moving objects from a moving camera image sequence using robust estimation and active contour models. We assume that the apparent background motion between two consecutive image frames can be approximated by an affine transformation. In order to register the static background, we estimate the affine transformation parameters using the LMedS (Least Median of Squares) method, a kind of robust estimator. Split-and-merge contour models are employed for tracking multiple moving objects. The image energy of the contour models is defined on the image obtained by subtracting the previous frame, transformed with the estimated affine parameters, from the current frame. We have implemented the method on an image processing system consisting of DSP boards for real-time tracking of moving objects from a moving camera image sequence.
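The LMedS principle can be sketched on feature matches between consecutive frames: draw minimal samples, score each hypothesis by the median squared residual, and keep the hypothesis with the smallest median, which tolerates up to roughly half the matches being outliers (here foreground objects). The paper estimates full affine parameters; a pure translation keeps the sketch minimal, and the sample count is an assumed value.

```python
import random

def lmeds_translation(matches, n_samples=100, seed=0):
    """Least Median of Squares estimate of a 2-D background shift.
    matches: list of ((x1, y1), (x2, y2)) pairs, possibly with outliers."""
    rng = random.Random(seed)

    def median(v):
        s = sorted(v)
        return s[len(s) // 2]

    best_t, best_score = None, float("inf")
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 match
        tx, ty = x2 - x1, y2 - y1
        residuals = [(c - a - tx) ** 2 + (d - b - ty) ** 2
                     for (a, b), (c, d) in matches]
        score = median(residuals)                  # median, not mean: robust
        if score < best_score:
            best_score, best_t = score, (tx, ty)
    return best_t

# Background shifts by (3, 1); two matches lie on a moving foreground object.
matches = [((x, y), (x + 3, y + 1))
           for x, y in [(0, 0), (2, 5), (4, 1), (6, 6), (8, 3)]]
matches += [((1, 1), (20, 20)), ((2, 2), (25, 15))]
print(lmeds_translation(matches))  # (3, 1)
```

Subtracting the previous frame warped by the recovered parameters then leaves energy only where objects move independently of the background, which is what drives the contour models.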
